This repository was archived by the owner on Jul 4, 2025. It is now read-only.

Conversation

@vansangpfiev (Contributor) commented on Dec 26, 2024

  • For vision models, the application utilizes a dedicated, customized server that runs within the same process as the main application.
  • To handle text and embedding models, the application spawns a separate child process for each model (a minimal sketch of this approach follows the issue link below).

Issue: janhq/cortex.cpp#1728
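
The following is a minimal, POSIX-only sketch of the child-process-per-model approach described above, not the actual cortex.cpp implementation: the parent forks and exec's one llama.cpp `llama-server` binary per text/embedding model and keeps the PID so the model can later be unloaded by terminating its process. The class and method names (`LlamaServerPool`, `Spawn`, `Stop`) are illustrative assumptions; vision models would instead be handled by the in-process server and never take this path.

```cpp
#include <csignal>
#include <stdexcept>
#include <string>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>
#include <unordered_map>

// Hypothetical manager: one llama.cpp server process per text/embedding model.
class LlamaServerPool {
 public:
  // Spawn a dedicated llama-server child for this model.
  pid_t Spawn(const std::string& model_id, const std::string& model_path,
              int port) {
    pid_t pid = fork();
    if (pid < 0) throw std::runtime_error("fork failed");
    if (pid == 0) {
      // Child: replace this process image with the llama.cpp server binary.
      std::string port_str = std::to_string(port);
      execlp("llama-server", "llama-server",
             "-m", model_path.c_str(),
             "--port", port_str.c_str(),
             static_cast<char*>(nullptr));
      _exit(127);  // Only reached if exec failed.
    }
    children_[model_id] = pid;  // Parent: remember the child for teardown.
    return pid;
  }

  // Unload a model by terminating its dedicated server process.
  void Stop(const std::string& model_id) {
    auto it = children_.find(model_id);
    if (it == children_.end()) return;
    kill(it->second, SIGTERM);
    waitpid(it->second, nullptr, 0);  // Reap the child to avoid a zombie.
    children_.erase(it);
  }

 private:
  std::unordered_map<std::string, pid_t> children_;  // model_id -> child PID
};
```

One consequence of this design is fault isolation: a crash while serving one model only takes down that model's child process, leaving the main application and the other models' servers running.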

@vansangpfiev force-pushed the feat/use-llama-cpp-server branch 11 times, most recently from 1d900b3 to 48d0455 on December 27, 2024 06:34
@vansangpfiev force-pushed the feat/use-llama-cpp-server branch from 48d0455 to b1fe9a9 on December 27, 2024 06:57
@vansangpfiev force-pushed the feat/use-llama-cpp-server branch from 9020e46 to 7cb0706 on December 31, 2024 04:00
@vansangpfiev force-pushed the feat/use-llama-cpp-server branch from 7cb0706 to 051e9d6 on December 31, 2024 06:29
@vansangpfiev marked this pull request as ready for review on January 2, 2025 02:13
@vansangpfiev force-pushed the feat/use-llama-cpp-server branch from 8a9f42d to aefc495 on January 2, 2025 07:08
@vansangpfiev force-pushed the feat/use-llama-cpp-server branch from 399d01f to ba7e5af on January 13, 2025 00:36
@vansangpfiev force-pushed the feat/use-llama-cpp-server branch from 8d06985 to 565cb58 on February 5, 2025 15:45
